AI for work tasks
- why are people turning to AI for work tasks that they would previously have done themselves?
Technology is primarily about automation. AI is designed to automate tasks and make organisations more efficient. If organisations are more efficient, then there is an expectation that they will be more productive.
Productivity is something that leaks into our personal lives as well as our professional lives. Digital tools such as email, Slack and kanban boards are now things that people increasingly use both at work and at home. Social media is awash with people trying to sell us so-called productivity hacks.
These societal introjects of hyper-capitalism and hyper-individualism influence how we feel about ourselves. If we aren’t busy or productive, then we equate this with not being of value. However, productivity is a concept that doesn’t necessarily translate well to people.
- is this potentially putting an extra burden on their colleagues?
There is plenty of research explaining the connection between handwriting and memory. EEG data demonstrates that when we handwrite something, it requires considerably more brain activity and more connections: visual, sensory, auditory and motor.
Although laptops and tablets make it easier for us to capture more information in less time, this doesn’t help us understand, process or recall that information.
So it follows that when we use AI - which can deliver information faster than we can type (or read) - we are less likely to process it. This can lead to a lack of innovation, creativity and collaboration.
There is a strong argument that AI is not actually well suited to low-level, repetitive tasks and would be much better suited to complex, analytical tasks.
- who is bearing the brunt of this?
As a women’s psychotherapist and coach, I would say that it’s women. On an individual level, it’s largely women’s jobs that are being automated by AI, because women tend to work in more administrative roles. Women are also more likely than men to work part time, which often means they need shortcuts to meet the requirements of their roles without ending up working hours they aren’t paid for (though this often ends up being the case anyway). For example, colleagues who work as lecturers or counselling tutors regularly report working double the hours they are contracted for. So, again, it’s understandable that women want to find shortcuts or efficiencies in their work.
In September 2025, OpenAI released a report highlighting that, for the first time, women were using ChatGPT more than men. Over 70% of their inquiries were categorised as ‘non-professional’. In my research for a recent BACP conference, I discovered that all of my clients use chatbots at work and for relationship and mental health support.
My concern is that instead of lightening the invisible mental load that women disproportionately carry, AI tools are contributing to it. By relying on AI - at work or at home - we are tacitly agreeing that the load should be ours, rather than seeking to rebalance it.
- many of us think we can spot the signs of AI-generated work... but then also hand in work that is AI-generated - is there a disconnect/dissonance here?
The dissonance about AI starts with the fact that we use it as much as we do at all, given that all the major platforms are owned by tech bros, that AI tools have been put to egregious uses such as de-nudification apps and so-called rape academies, that LLMs are trained predominantly on white, western, English-language data, and that algorithmic bias disproportionately discriminates against women. So it’s understandable that this dissonance extends to this specific level. It makes me think about the MeToo movement, when so many of us felt as though, once things were made public, systems would change. I remember after the Brett Kavanaugh hearings thinking that it’s not that nobody knows, it’s that nobody cares. It feels like there are parallels here with the adoption of AI.
- are we kidding ourselves or just more likely to spot others' mistakes?!
We are definitely more likely to spot others’ mistakes, due to the psychological distance between someone else’s work and our own. This applies to people, of course, but it also applies to AI. The challenge is that many people ignore the obvious limitations of AI, perhaps conflating the power of the tool with its accuracy. Generative AI tools are renowned for their inaccuracy, e.g. ‘hallucinations’ - where they cite erroneous or non-existent data - as in the example where the Met Police banned Tel Aviv fans from attending a football match. Despite some of the disclaimers around things like medical and mental health advice, a chatbot never admits that it might be wrong. When we ask a chatbot a question, the quality of the sources it analyses is not checked. For example, we could ask a health question and the majority of the response we get back could have come from social media forums. If we don’t interrogate the data sources, then we stand to make serious mistakes at work. But of course, fact checking takes time.
The whole point of any digital platform is to keep us on it for as long as possible and make us dependent on it. This is why chatbots are so compelling. One of the things I talk about when I present at conferences is how important it is to remember that although a chatbot might be intelligent, it doesn’t have a mind. It doesn’t have any lived experience. It’s never had a cold, broken a bone, experienced grief, or been fired from a job.